It is found that GPT-3 can create highly persuasive text as measured by participants’ agreement with propaganda theses, and it is suggested that propagandists could use AI to create convincing content with limited effort.
A programme of new "dangerous capability" evaluations is introduced and piloted on Gemini 1.0 models to help advance a rigorous science of dangerous capability evaluation, in preparation for future models.
Inspired by Jürgen Habermas’s theory of communicative action, the “Habermas Machine” was designed to iteratively generate group statements based on the personal opinions and critiques of individual users, with the goal of maximizing group approval ratings.
The research emphasises the dominance of deep learning-based methods in detecting deepfakes, while also acknowledging the drawbacks of these approaches, such as their limited computational efficiency and generalisation.
A big-picture perspective of the deepfake paradigm is presented by reviewing current and future trends and delving into the potential that new technologies, such as distributed ledgers and blockchain, can offer with regard to cybersecurity and the fight against digital deception.
This work proposes ambitious yet achievable global targets for 2030 (relative to a prepandemic 2019 baseline): a 10% reduction in mortality from AMR; a 20% reduction in inappropriate human antibiotic use; and a 30% reduction in inappropriate animal antibiotic use.
A concise survey of the state of the art in fair-AI methods and resources, and of the main policies on bias in AI, is presented, with the aim of providing bird’s-eye guidance for both researchers and practitioners.
The findings demonstrate that LLMs can reproduce human-like behaviors, such as fairness, cooperation, and social norm adherence, while also introducing unique advantages such as cost efficiency, scalability, and ethical simplification.
This paper suggests a methodology for assessing AI risk magnitudes, focusing on the construction of real-world risk scenarios, and refines the proposed methodology by applying a proportionality test to balance the competing values involved in AI risk assessment.
The report synthesises the scientific understanding of general-purpose AI -- AI that can perform a wide variety of tasks -- with a focus on understanding and managing its risks.
US states that adopted abortion bans had higher than expected infant mortality after the bans took effect, and these increases were larger for deaths with congenital causes and among groups that had higher than average infant mortality rates at baseline.
The increase in suspected clade I cases in DRC raises concerns that the virus could spread to other countries and underscores the importance of coordinated, urgent global action to support DRC's efforts to contain the virus.
The historical context of using individual-level data in public health interventions is discussed, and recent advances in how data from human and pathogen genomics, from social, behavioral and environmental research, and from artificial intelligence have transformed public health are examined.
This Viewpoint argues that the evidence indicates a need to prioritise and support infection prevention interventions and to recalibrate global funding resources to address this need, and it calls on global leaders to redress the current response.